An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
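A minimal sketch of the class-conditioning idea: text-derived class embeddings drive a controller that produces a per-class segmentation head over shared decoder features. Random vectors stand in for the CLIP text features, and all dimensions and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the paper, class embeddings come from the CLIP text encoder;
# here they are fixed random vectors (illustrative assumption).
NUM_CLASSES, EMB_DIM, FEAT_DIM = 3, 8, 4
class_embeddings = rng.normal(size=(NUM_CLASSES, EMB_DIM))

# A controller maps each class embedding to the parameters of a class-specific
# 1x1 conv head applied to shared decoder features.
W_ctrl = rng.normal(size=(EMB_DIM, FEAT_DIM + 1)) * 0.1   # weights + bias

def segment(features, cls):
    """features: (H, W, FEAT_DIM) shared features; returns (H, W) mask probs."""
    params = class_embeddings[cls] @ W_ctrl               # (FEAT_DIM + 1,)
    w, b = params[:FEAT_DIM], params[FEAT_DIM]
    logits = features @ w + b                             # per-pixel logit
    return 1.0 / (1.0 + np.exp(-logits))                  # sigmoid, one-vs-all

features = rng.normal(size=(2, 2, FEAT_DIM))
masks = np.stack([segment(features, c) for c in range(NUM_CLASSES)])
print(masks.shape)  # one probability map per class
```

Because every class shares the same feature extractor and only the light-weight head is class-specific, adding a new class amounts to adding one embedding, which is what makes extension without catastrophic forgetting plausible.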
The lack of efficient segmentation methods and fully-labeled datasets limits the comprehensive assessment of optical coherence tomography angiography (OCTA) microstructures like the retinal vessel network (RVN) and the foveal avascular zone (FAZ), which are of great value in the evaluation of ophthalmic and systemic diseases. Here, we introduce an innovative OCTA microstructure segmentation network (OMSN) that combines an encoder-decoder-based architecture having multi-scale skip connections with the split-attention-based residual network ResNeSt, paying specific attention to OCTA microstructural features while facilitating better model convergence and feature representations. The proposed OMSN achieves excellent single/multi-task performance for RVN and/or FAZ segmentation. Notably, the evaluation metrics of the multi-task models outperform those of the single-task models on the same dataset. On this basis, a fully annotated retinal OCTA segmentation (FAROS) dataset is constructed semi-automatically, filling the vacancy of a pixel-level fully-labeled OCTA dataset. The OMSN multi-task segmentation model retrained with FAROS further certifies its outstanding accuracy for simultaneous RVN and FAZ segmentation.
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. Existing end-to-end methods rely on dense representations to preserve the spatial detail and structure needed for precise keypoint localization; however, the dense paradigm introduces complex and redundant post-processing during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM), which uses a local spatial attention mechanism to generate spatial-sensitive part embeddings containing the spatial details and structural information that enhance the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries stage by stage via the generated spatial-sensitive part embeddings. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information needed for precise keypoint regression. With bipartite matching, QueryPose avoids hand-designed post-processing and surpasses the existing dense end-to-end methods with 73.6 AP on the MS COCO mini-val set and 72.7 AP on the CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
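The bipartite matching step that lets query-based methods drop hand-designed post-processing can be illustrated with a toy assignment between predicted and ground-truth keypoint sets. The brute-force search below is only viable for tiny instance counts; real systems typically use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`).

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Toy setting: 3 ground-truth persons and 3 predicted sets of 17 (x, y) joints.
gt = rng.uniform(0, 100, size=(3, 17, 2))
pred = gt[[2, 0, 1]] + rng.normal(scale=1.0, size=(3, 17, 2))  # shuffled + noise

# Pairwise cost: mean L2 distance between keypoint sets.
cost = np.linalg.norm(pred[:, None] - gt[None, :], axis=-1).mean(axis=-1)  # (3, 3)

# Brute-force one-to-one assignment minimizing total cost.
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i, p[i]] for i in range(3)))
print(best)  # prediction i is matched to ground truth best[i]
```

One-to-one matching makes each query responsible for exactly one person, so no duplicate-suppression step (e.g. NMS) is needed at inference time.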
As many deep anomaly detection models have been deployed in the real world, interpretable anomaly detection has become an emerging task. Recent studies focus on identifying the features of samples that lead to abnormal outcomes but cannot recommend a set of actions to flip those outcomes. In this work, we focus on interpretations via algorithmic recourse, which shows how to revert abnormal predictions by suggesting actions on features. The key challenge is that algorithmic recourse involves interventions in the physical world and is therefore fundamentally a causal problem. To tackle this challenge, we propose an interpretable Anomaly Detection framework using Causal Algorithmic Recourse (ADCAR), which recommends recourse actions and infers counterfactuals of abnormal samples guided by the causal mechanism. Experiments on three datasets show that ADCAR can flip the abnormal labels with minimal interventions.
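The abduction-then-intervention loop behind causal recourse can be sketched on a two-variable toy structural causal model. The mechanism coefficients, the detector, and the threshold below are all illustrative assumptions, not ADCAR's actual components.

```python
import numpy as np

# Toy linear SCM: x1 := u1, x2 := 0.8 * x1 + u2 (coefficients assumed).
def forward(x1, u2):
    return 0.8 * x1 + u2

# Illustrative anomaly detector: flag points whose x1 + x2 exceeds a threshold.
THRESH = 3.0
def is_anomalous(x1, x2):
    return x1 + x2 > THRESH

# Observed anomalous sample; abduct its noise term from the SCM.
x1_obs, x2_obs = 2.0, 2.5
u2 = x2_obs - 0.8 * x1_obs            # abduction: recover the exogenous noise
assert is_anomalous(x1_obs, x2_obs)

# Causal recourse: search the smallest intervention on x1; its downstream
# effect on x2 is propagated through the mechanism, not held fixed.
for delta in np.arange(0.0, 5.0, 0.05):
    x1_new = x1_obs - delta
    x2_new = forward(x1_new, u2)      # counterfactual x2 under do(x1 := x1_new)
    if not is_anomalous(x1_new, x2_new):
        break
print(round(x1_new, 2), round(x2_new, 2))
```

The key contrast with non-causal recourse is the `forward` call: changing x1 also moves x2, so a smaller intervention on x1 suffices than if the features were treated as independent.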
Many data analysis tasks rely heavily on a deep understanding of tables (multi-dimensional data). Across these tasks, there exist commonly used metadata attributes of table fields/columns. In this paper, we identify four such analysis metadata: the measure/dimension dichotomy, common field roles, semantic field types, and default aggregation functions. Although these metadata face the challenge of insufficient supervision signals, existing knowledge and an understanding of field distributions can be leveraged. To infer these metadata for a raw table, we propose a multi-task metadata model that fuses field distribution and knowledge graph information into pre-trained tabular models. For model training and evaluation, we collect a large corpus of analysis metadata (~582K tables from private spreadsheets and public tabular datasets) using diverse smart supervisions from downstream tasks. Our best model achieves accuracy = 98%, hit rate at top-1 > 67%, accuracy > 80%, and accuracy = 88% on the four analysis metadata inference tasks, respectively. It outperforms a series of baselines based on rules, traditional machine learning methods, and pre-trained tabular models. The analysis metadata model has been deployed in a popular data analysis product, supporting downstream intelligent features such as insight mining, chart/pivot-table recommendation, and natural language QA...
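As a plain illustration of what two of the four metadata labels mean (not the paper's learned model, which fuses field distributions with a knowledge graph), a rule-based stand-in for the measure/dimension dichotomy and the default aggregation function might look like:

```python
# Heuristic sketch: high-cardinality numeric fields behave like measures
# (default SUM); everything else behaves like a dimension (default COUNT).
# The thresholds and rules are illustrative assumptions.
def infer_metadata(name, values):
    numeric = all(isinstance(v, (int, float)) for v in values)
    cardinality = len(set(values)) / len(values)
    role = "measure" if numeric and cardinality > 0.5 else "dimension"
    default_agg = "SUM" if role == "measure" else "COUNT"
    return {"field": name, "role": role, "default_agg": default_agg}

print(infer_metadata("Sales", [120.5, 99.0, 430.2, 88.8]))
print(infer_metadata("Region", ["East", "West", "East", "South"]))
```

Rules like these break down on ambiguous fields (e.g. numeric IDs or year columns), which is exactly the gap a learned model with distribution and knowledge-graph signals is meant to close.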
Neural network (NN) based methods have become an attractive approach for robot motion planning due to the strong learning capability of NN models and their inherently high parallelism. Despite the current development in this direction, the efficient capture and processing of important sequential and spatial information, in a direct and simultaneous way, remains relatively under-explored. To overcome this challenge and unlock the potential of neural networks for motion planning tasks, in this paper we propose STP-Net, an end-to-end learning framework that can fully extract and leverage important spatio-temporal information to form an efficient neural motion planner. By interpreting the robot's movement as a video clip, robot motion planning is transformed into a video prediction task that STP-Net can perform in a spatially and temporally efficient way. Empirical evaluations across different seen and unseen environments show that, with nearly 100% accuracy (i.e., success rate), STP-Net demonstrates very promising performance with respect to both planning speed and path cost. Compared with existing NN-based motion planners, STP-Net achieves at least 5x, 2.6x, and 1.8x faster planning speed with lower path cost in 2D random forest, 2D maze, and 3D random forest environments, respectively. Furthermore, STP-Net can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.
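The planning-as-video-prediction framing can be sketched with a toy grid world: each planner state is rendered as a frame, and a next-frame predictor proposes the robot's next cell. The greedy predictor below is a hand-written stand-in for the learned STP-Net, used only to make the frame-sequence formulation concrete.

```python
import numpy as np

GRID = np.zeros((5, 5), dtype=int)
GRID[2, 1:4] = 1                      # a wall of obstacles

def render(pos):
    """Render a planner state as a frame: obstacles = 1, robot cell = 2."""
    frame = GRID.copy()
    frame[pos] = 2
    return frame

def predict_next(pos, goal):
    """Greedy stand-in for a learned next-frame predictor."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    candidates = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves]
    free = [p for p in candidates
            if 0 <= p[0] < 5 and 0 <= p[1] < 5 and GRID[p] == 0]
    return min(free, key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))

pos, goal = (0, 0), (4, 4)
frames = [render(pos)]
for _ in range(12):
    if pos == goal:
        break
    pos = predict_next(pos, goal)
    frames.append(render(pos))
print(len(frames), pos)               # the "video": one frame per planning step
```

In this formulation the plan itself is the predicted frame sequence, which is what allows a video-prediction architecture to process the spatial map and the temporal rollout simultaneously.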
Predicting disruptions across different tokamaks is an obstacle that must be overcome. Future tokamaks can hardly tolerate disruptions during high-performance discharges, yet the few high-performance disruptive discharges available can hardly constitute an abundant training set, which makes it difficult for current data-driven methods to obtain acceptable results. A machine learning method capable of transferring a disruption prediction model trained on one tokamak to another is required to solve the problem. The key is a disruption prediction model containing a feature extractor that is able to extract common disruption precursor traces from tokamak diagnostic data, together with a transferable disruption classifier. To address the concerns above, this paper first presents a deep fusion feature extractor designed specifically to extract disruption precursor features from diagnostics common across tokamaks, based on currently known disruption precursors, providing a promising foundation for transferable models. The fusion feature extractor is validated by comparison with manual feature extraction on J-TEXT. Based on the feature extractor trained on J-TEXT, the disruption prediction model is transferred to EAST data with only 20 discharges from EAST experiments. The performance is comparable to that of a model trained with 1,896 EAST discharges. This comparison with other model training scenarios shows the potential of transfer learning for predicting disruptions across different tokamaks.
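The transfer scheme, freezing the pretrained feature extractor and refitting only the classifier on a handful of target-machine discharges, can be sketched on synthetic data. A fixed random projection stands in for the deep fusion extractor, and random vectors stand in for diagnostic signals; nothing here reflects real tokamak data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: extractor "pretrained" on the source machine, then frozen.
D_RAW, D_FEAT = 20, 5
extractor = rng.normal(size=(D_RAW, D_FEAT))
w_true = rng.normal(size=D_FEAT)                  # hidden disruptive pattern

# Only 20 labeled target-machine discharges, mirroring the 20 EAST shots.
x = rng.normal(size=(20, D_RAW))
y = ((x @ extractor) @ w_true > 0).astype(float)  # disruptive vs. non-disruptive

# Transfer step: keep the extractor frozen, refit only a logistic-regression head.
feats = x @ extractor
w = np.zeros(D_FEAT)
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(feats @ w, -30, 30)))
    w -= 0.1 * feats.T @ (p - y) / len(y)         # gradient of the logistic loss

acc = (((feats @ w) > 0) == (y > 0.5)).mean()
print(acc)
```

Because only the small head is refit, a few labeled target discharges suffice; the burden of learning disruption precursors stays in the frozen extractor trained on the data-rich machine.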
To enable early medical intervention for infants with cerebral palsy (CP), early diagnosis of brain injury is crucial. Although the General Movements Assessment (GMA) has shown promising results for early CP detection, it is laborious. Most existing works take videos as input to classify fidgety movements (FMs) for automating GMA. These methods require the videos to be fully observed and cannot localize the video frames containing normal FMs. We therefore propose a novel approach named WO-GMA to perform FMs localization in a weakly supervised online setting. Infant body keypoints are first extracted as the input of WO-GMA. WO-GMA then performs local spatio-temporal feature extraction, followed by two network branches that generate pseudo clip-level labels and model online actions. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online manner. Experimental results on a dataset with 757 videos of different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of the video duration is needed to obtain classification results comparable to those with full observation, which means a significantly shortened FMs diagnosis time. Code is available at: https://github.com/scofiedluo/WO-GMA.
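The pseudo-label branch can be illustrated with a common weak-supervision recipe: pool per-clip scores into a video-level prediction (top-k mean pooling), then promote the highest-scoring clips to pseudo clip-level labels. This is a generic sketch; the exact pooling and thresholds in WO-GMA may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Only a video-level label ("contains normal FMs") is available; the model
# produces a per-clip FMs score for each of 16 clips (random stand-ins here).
clip_scores = rng.uniform(size=16)
k = 4

topk = np.sort(clip_scores)[-k:]
video_score = topk.mean()                         # video-level prediction
pseudo_labels = (clip_scores >= topk.min()).astype(int)

print(round(video_score, 3), pseudo_labels.sum())
```

The pooled score is trained against the video-level label, while the pseudo clip labels give the online branch the per-clip supervision that the dataset itself lacks.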
Emotion recognition in conversation (ERC) aims to detect the emotion of each utterance in a given conversation. Recently proposed ERC models leverage pre-trained language models (PLMs) with the pre-training-and-fine-tuning paradigm to obtain good performance. However, these models seldom exploit PLMs' advantages thoroughly and perform poorly on conversations lacking explicit emotional expressions. To fully utilize the latent knowledge related to the emotional expressions in utterances, we propose a novel ERC model, CISPER, with a new paradigm of prompt and language model (LM) tuning. Specifically, CISPER is equipped with a prompt blending contextual information and commonsense related to the interlocutor's utterances, to achieve ERC more effectively. Our extensive experiments demonstrate CISPER's superior performance over state-of-the-art ERC models and the effectiveness of leveraging these two kinds of significant prompt information for performance gains. To conveniently reproduce our experimental results, CISPER's source code and datasets have been shared at https://github.com/deqingyang/cisper.
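The prompt-blending idea can be illustrated by assembling context and commonsense into a cloze-style prompt for an LM. The template and the commonsense strings below are hand-written assumptions, not CISPER's actual template or the output of a knowledge model.

```python
# Illustrative prompt construction: contextual utterances and retrieved
# commonsense are fused into a single cloze-style prompt fed to the LM.
def build_prompt(context, utterance, commonsense):
    ctx = " ".join(f"[{spk}] {utt}" for spk, utt in context)
    cs = " ".join(commonsense)
    return (f"Context: {ctx} Commonsense: {cs} "
            f"The emotion of \"{utterance}\" is [MASK].")

prompt = build_prompt(
    context=[("A", "I failed the exam again."), ("B", "Oh no, I'm so sorry.")],
    utterance="I don't know what to do anymore.",
    commonsense=["Speaker A feels disappointed.", "Speaker A wants comfort."],
)
print(prompt)
```

Filling the `[MASK]` slot turns emotion classification into the masked-prediction task the PLM was pre-trained on, which is what lets prompting exploit latent knowledge that plain fine-tuning misses.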
Neuromorphic computing is an emerging research field that aims to develop new intelligent systems by integrating theories and technologies from multiple disciplines such as neuroscience and deep learning. Currently, various software frameworks have been developed for related fields, but an efficient framework dedicated to spike-based computing models and algorithms is lacking. In this work, we present SPAIC, a Python-based spiking neural network (SNN) simulation and training framework that aims to support brain-inspired model and algorithm research integrating features of both deep learning and neuroscience. To unite the different methodologies of these two overwhelming disciplines and to balance flexibility and efficiency, SPAIC adopts a neuroscience-style frontend and a deep-learning-backend structure. We provide a wide range of examples, including neural circuit simulation, deep SNN learning, and neuromorphic applications, demonstrating the concise coding style and broad usability of the framework. As a dedicated spike-based artificial intelligence computing platform, SPAIC will significantly facilitate the design, prototyping, and validation of new models, theories, and applications. Being user-friendly, flexible, and high-performance, it will help accelerate the rapid growth and wide applicability of neuromorphic computing research.
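The basic unit such a framework simulates is the spiking neuron. A standalone leaky integrate-and-fire (LIF) sketch (plain Python/NumPy, not the SPAIC API) shows the spike-based computation model:

```python
import numpy as np

def lif(input_current, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrate, threshold, reset."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)        # leaky integration of input current
        if v >= v_th:                   # crossing the threshold emits a spike
            spikes.append(1)
            v = v_reset                 # membrane potential resets after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif(np.full(100, 1.5))         # constant suprathreshold input
print(spikes.sum())                     # number of spikes over 100 steps
```

Information is carried by the timing and count of these binary spikes rather than by continuous activations, which is the core difference a spike-based framework must handle efficiently across its simulation and training backends.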